A simple and effective method for visual saliency detection in colour images is presented. The method is based on the common observation that locally salient regions exhibit geometric and texture patterns distinct from those of neighbouring regions. The colour distribution of local image patches is modelled with a Gaussian density, and the saliency of each patch is measured as its statistical distance from that density...
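As an illustration of the general idea this abstract describes (the paper's exact formulation is not shown here), a patch can be scored by fitting a Gaussian to the pixel colours of its neighbourhood and taking the Mahalanobis distance of the patch's mean colour from that density. The function below is a minimal sketch under those assumptions; `patch` and `neighbourhood` are hypothetical H×W×3 colour arrays.

```python
# Illustrative sketch (not the paper's method): colour saliency of a patch
# as the Mahalanobis distance from a Gaussian fit to its neighbourhood.
import numpy as np

def patch_saliency(patch, neighbourhood):
    """Fit a Gaussian to the neighbourhood's pixel colours and score the
    patch's mean colour by its statistical (Mahalanobis) distance."""
    colours = neighbourhood.reshape(-1, 3).astype(float)      # N x 3 pixels
    mu = colours.mean(axis=0)
    cov = np.cov(colours, rowvar=False) + 1e-6 * np.eye(3)    # regularise
    diff = patch.reshape(-1, 3).astype(float).mean(axis=0) - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))
```

A patch whose colours depart strongly from the surrounding distribution receives a large distance, matching the observation that salient regions differ from their neighbours.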
In the field of face recognition, sparse representation (SR) has received considerable attention during the past few years, with a focus on holistic descriptors in closed-set identification applications. The underlying assumption in such SR-based methods is that each class in the gallery has sufficient samples and that the query lies in the subspace spanned by the gallery of the same class. Unfortunately,...
We propose using multi-layer multiple instance learning (MMIL) for image set classification and apply it to the task of cannabis website classification. We treat each image as an instance in an image set, and each image is in turn viewed as containing instances of local image patches. This representation naturally extends traditional multiple instance learning (MIL) to multiple layers. We then show...
Visual vocabulary serves as a fundamental component in many computer vision tasks, such as object recognition, visual search, and scene modeling. While state-of-the-art approaches build the visual vocabulary based solely on visual statistics of local image patches, the correlated image labels are left unexploited in generating visual words. In this work, we present a semantic embedding framework to integrate...
This paper presents a part-based approach for detecting objects with large variation in appearance. We extract local image patches as local features, from both the object and the background in training images, to learn an object part model discriminatively. The model discriminates whether each local feature is an object part or not. Based on the discrimination results, each local...
This paper proposes a general feature selection approach for real-time image matching systems. To demonstrate the idea's effectiveness, we focus on the issue of rotational invariance. Most current image matching methods compute a dominant orientation and align local image patches to it, an approach that is either too computationally expensive for real-time systems or insufficiently robust. In contrast...
This paper describes a coarse-to-fine learning based image registration algorithm which has particular advantages in dealing with multi-modality images. Many existing image registration algorithms use a few designed terms or mutual information to measure the similarity between image pairs. Instead, we push the learning aspect by selecting and fusing a large number of features for measuring the similarity...
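The mutual-information similarity this abstract contrasts itself with is a standard measure for multi-modality registration. As a hedged sketch of that baseline (not of the paper's learned similarity), mutual information can be estimated from a joint intensity histogram of the two images; the bin count below is an arbitrary illustrative choice.

```python
# Illustrative baseline: mutual information between two grayscale images,
# estimated from a joint intensity histogram. Common in multi-modality
# registration, where raw intensity differences are uninformative.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)       # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)       # marginal p(y)
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Well-aligned image pairs yield a peaked joint histogram and hence high mutual information; misaligned or unrelated pairs spread the joint histogram and score lower.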
The notion of using context information for solving high-level vision problems has been increasingly realized in the field. However, how to learn an effective and efficient context model, together with the image appearance, remains mostly unknown. The current literature using Markov random fields (MRFs) and conditional random fields (CRFs) often involves specific algorithm design, in which the modeling...